Neural Network Training for Prediction of Climatological Time Series, Regularized by Minimization of the Generalized Cross-Validation Function

2000 ◽ Vol 128 (5) ◽ pp. 1456-1473
Author(s): Yuval

2011 ◽ Vol 201-203 ◽ pp. 2685-2689
Author(s): Chong Gao ◽ Hai Jie Ma ◽ Pei Na Gao

Improving accuracy is the central problem of load forecasting. Because the daily load is affected by various environmental factors and exhibits periodicity, the load time series behaves as a non-stationary random process. The key to improving the training accuracy of an artificial neural network is the selection of effective training samples. This paper uses the autocorrelation function of the random time series, a standard tool of time-series forecasting, to select the neural network training samples, which makes the modeling method more objective. In a worked example, error analysis comparing the proposed scheme against the predictions of an autoregressive (AR) model and a BP artificial neural network (ANN) confirms its good performance.
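The abstract gives no implementation details, so the following is only a minimal sketch of the idea in Python: compute the sample autocorrelation function (ACF) of the load series, keep the most strongly autocorrelated lags as network inputs, and train a small network on the resulting samples. All names (`autocorrelation`, `select_lags`, `make_lagged_samples`), the synthetic load series, the hold-out size, and the use of scikit-learn's `MLPRegressor` in place of the paper's BP network are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch: pick neural-network training inputs via the
# autocorrelation function of the load series, as the abstract describes.
import numpy as np
from sklearn.neural_network import MLPRegressor

def autocorrelation(x, max_lag):
    """Sample ACF of a 1-D series for lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / var for k in range(1, max_lag + 1)])

def select_lags(x, max_lag=48, n_lags=6):
    """Keep the lags with the strongest autocorrelation as NN inputs."""
    acf = autocorrelation(x, max_lag)
    order = np.argsort(-np.abs(acf))      # strongest correlation first
    return np.sort(order[:n_lags] + 1)    # 0-based index -> lag

def make_lagged_samples(x, lags):
    """Build (input, target) pairs: load at the selected past lags -> current load."""
    start = max(lags)
    X = np.column_stack([x[start - k:len(x) - k] for k in lags])
    y = x[start:]
    return X, y

# Toy daily-periodic "load" series standing in for real data.
rng = np.random.default_rng(0)
t = np.arange(2000)
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)

lags = select_lags(load)
X, y = make_lagged_samples(load, lags)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:-200], y[:-200])             # hold out the last 200 points
mae = np.mean(np.abs(model.predict(X[-200:]) - y[-200:]))
print("selected lags:", lags, " test MAE:", round(mae, 2))
```

On a periodic load series the ACF is largest near multiples of the period (here, 24 steps), so the selection of input lags falls out of the data itself rather than being chosen by hand, which is the sense in which the abstract calls the modeling "more objective".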


2017 ◽ Vol 13 (7) ◽ pp. 211-217
Author(s): Paola A. Sánchez-Sánchez ◽ José Rafael García-González

Entropy ◽ 2021 ◽ Vol 23 (6) ◽ pp. 711
Author(s): Mina Basirat ◽ Bernhard C. Geiger ◽ Peter M. Roth

Information plane analysis, describing the mutual information between the input and a hidden layer and between a hidden layer and the target over time, has recently been proposed to analyze the training of neural networks. Since the activations of a hidden layer are typically continuous-valued, this mutual information cannot be computed analytically and must thus be estimated, leading to apparently inconsistent or even contradictory results in the literature. The goal of this paper is to demonstrate how information plane analysis can still be a valuable tool for analyzing neural network training. To this end, we complement the prevailing binning estimator for mutual information with a geometric interpretation. With this geometric interpretation in mind, we evaluate the impact of regularization and interpret phenomena such as underfitting and overfitting. In addition, we investigate neural network learning in the presence of noisy data and noisy labels.
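Since the abstract turns on the binning estimator of mutual information, a minimal sketch of that estimator may be useful. It is a generic illustration, not the authors' geometric analysis: the helper names (`bin_activations`, `entropy`, `mutual_information`), the bin count, and the simplifying assumptions (input distribution uniform over the dataset, activations deterministic given the input) are mine.

```python
# Minimal sketch of the binning estimator of mutual information used in
# information plane analysis: quantize each activation dimension into fixed
# bins, then compute discrete mutual information from the empirical counts.
from collections import Counter
import numpy as np

def bin_activations(T, n_bins=30):
    """Quantize each activation vector into a discrete symbol (tuple of bin ids)."""
    edges = np.linspace(T.min(), T.max(), n_bins + 1)
    ids = np.clip(np.digitize(T, edges) - 1, 0, n_bins - 1)
    return [tuple(row) for row in ids]

def entropy(symbols):
    """Shannon entropy (in bits) of a list of hashable symbols."""
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(T, y, n_bins=30):
    """Estimate I(T;Y) = H(T) + H(Y) - H(T,Y) from binned activations and labels."""
    t_sym = bin_activations(T, n_bins)
    labels = [int(label) for label in y]
    joint = list(zip(t_sym, labels))
    return entropy(t_sym) + entropy(labels) - entropy(joint)

# Toy demo: 1000 "activations" of a 3-unit layer whose values carry label info.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
T = rng.normal(0, 1, (1000, 3)) + y[:, None]
print("I(T;Y) ≈", round(mutual_information(T, y), 3), "bits")
# For a deterministic network with distinct inputs, the binned estimate of
# I(X;T) reduces to the entropy of the binned activations:
print("I(X;T) ≈", round(entropy(bin_activations(T)), 3), "bits")
```

Tracing such estimates layer by layer over training epochs yields the information-plane trajectories; note that the estimates depend on the choice of bins, which is one source of the inconsistent results the abstract mentions and which the paper's geometric interpretation addresses.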

